6 research outputs found

    Implementation of Full Adder Using CMOS And DFAL Adiabatic Logic

    Power dissipation has always been a major concern in digital circuit design. As technology scales, device sizing and power consumption become key design parameters, and new adiabatic techniques are proposed each year to meet these requirements. The adder is a fundamental block in ALU design, digital signal processing, and ripple counters. Adders built with the conventional CMOS technique often suffer from complexity and sizing issues and dissipate more energy, which motivates designing the adder with an adiabatic approach to address these problems. In this paper, a full adder is first designed using the CMOS technique and then using DFAL (Diode-Free Adiabatic Logic), and the results are compared with the conventional CMOS circuit.
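    The logical behavior of the 1-bit full adder the abstract refers to is the same regardless of whether it is realized in CMOS or DFAL. A minimal sketch of that gate-level behavior (not the paper's transistor-level design):

    ```python
    # Gate-level behavior of a 1-bit full adder. This models only the Boolean
    # function; the CMOS vs. DFAL distinction in the paper is about how these
    # gates are implemented at the transistor level, not about this logic.
    def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
        """Return (sum, carry_out) for one-bit inputs a, b, cin."""
        s = a ^ b ^ cin                   # sum: XOR of the three inputs
        cout = (a & b) | (cin & (a ^ b))  # carry generated or propagated
        return s, cout

    # Exhaustive check against arithmetic: a + b + cin == 2*cout + s
    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                s, cout = full_adder(a, b, cin)
                assert a + b + cin == 2 * cout + s
    ```

    Chaining such cells carry-to-carry yields the ripple-carry structure mentioned alongside the ripple counter.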

    MEGAVERSE: Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks

    Recently, there has been a rapid advancement in research on Large Language Models (LLMs), resulting in significant progress on several Natural Language Processing (NLP) tasks. Consequently, there has been a surge in LLM evaluation research to comprehend the models' capabilities and limitations. However, much of this research has been confined to the English language, leaving LLM building and evaluation for non-English languages relatively unexplored. Several new LLMs have been introduced, necessitating their evaluation on non-English languages. This study aims to expand our MEGA benchmarking suite by including six new datasets to form the MEGAVERSE benchmark. The benchmark comprises 22 datasets covering 81 languages, including low-resource African languages. We evaluate several state-of-the-art LLMs, such as GPT-3.5-Turbo, GPT4, PaLM2, and Llama2, on the MEGAVERSE datasets. Additionally, we include two multimodal datasets in the benchmark and assess the performance of the LLaVa-v1.5 model. Our experiments suggest that GPT4 and PaLM2 outperform the Llama models on various tasks, notably on low-resource languages, with GPT4 outperforming PaLM2 on more datasets than vice versa. However, issues such as data contamination must be addressed to obtain an accurate assessment of LLM performance on non-English languages.
    Comment: 23 pages, 30 figures and 1 table
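    The "outperforming on more datasets than vice versa" claim is a pairwise win count over per-dataset scores. A minimal sketch of that comparison, using made-up dataset names and scores purely for illustration (not the paper's actual results):

    ```python
    # Pairwise win-count comparison across benchmark datasets.
    # All dataset names and scores below are hypothetical placeholders,
    # not results from the MEGAVERSE paper.
    def win_counts(scores_a: dict, scores_b: dict) -> tuple[int, int]:
        """Count datasets where model A beats model B, and vice versa."""
        a_wins = sum(1 for d in scores_a if scores_a[d] > scores_b[d])
        b_wins = sum(1 for d in scores_a if scores_b[d] > scores_a[d])
        return a_wins, b_wins

    model_a = {"dataset1": 0.71, "dataset2": 0.65, "dataset3": 0.52}  # hypothetical
    model_b = {"dataset1": 0.69, "dataset2": 0.67, "dataset3": 0.48}  # hypothetical
    wins_a, wins_b = win_counts(model_a, model_b)  # -> (2, 1)
    ```

    Ties count for neither model, so `wins_a + wins_b` can be less than the number of datasets.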

    MEGA: Multilingual Evaluation of Generative AI

    Generative AI models have shown impressive performance on many Natural Language Processing tasks such as language understanding, reasoning, and language generation. An important question being asked by the AI community today concerns the capabilities and limits of these models, and it is clear that evaluating generative AI is very challenging. Most studies of generative LLMs have been restricted to English, and it is unclear how capable these models are at understanding and generating text in other languages. We present the first comprehensive benchmarking of generative LLMs, MEGA, which evaluates models on standard NLP benchmarks, covering 16 NLP datasets across 70 typologically diverse languages. We compare the performance of generative LLMs, including Chat-GPT and GPT-4, to State of the Art (SOTA) non-autoregressive models on these tasks to determine how well generative models perform compared to the previous generation of LLMs. We present a thorough analysis of the performance of models across languages and tasks and discuss challenges in improving the performance of generative LLMs on low-resource languages. We create a framework for evaluating generative LLMs in the multilingual setting and provide directions for future progress in the field.
    Comment: EMNLP 202